On defining partition entropy by inequalities
Partition entropy is a numerical measure of the uncertainty within
a partition of a finite set, while conditional entropy measures the degree of
difficulty in predicting a decision partition when a condition partition is
provided. Since two direct methods exist for defining conditional entropy
from a partition entropy, the inequality postulates of monotonicity
that conditional entropy satisfies are in fact additional constraints on
the partition entropy itself. Thus, in this paper partition entropy is defined as a function
of probability distribution, satisfying all the inequalities of not only partition
entropy itself but also its conditional counterpart. These inequality
postulates formalize the intuitive understandings of uncertainty contained
in partitions of finite sets. We study the relationships between these inequalities
and reduce the redundancies among them. For each of the two
definitions of conditional entropy derived from a partition entropy, convenient
and unified checking conditions for any partition entropy are presented.
These properties generalize and illuminate the common nature
of all partition entropies.
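As a concrete illustration (not taken from the paper, which treats entropy functions axiomatically), the Shannon instance of partition entropy and the averaged-restriction form of conditional entropy can be sketched in Python; the function names `partition_entropy` and `conditional_entropy` are illustrative:

```python
from math import log2

def partition_entropy(partition, n):
    """Shannon entropy of a partition of an n-element set:
    H(pi) = -sum over blocks B of (|B|/n) * log2(|B|/n)."""
    return -sum(len(b) / n * log2(len(b) / n) for b in partition)

def conditional_entropy(decision, condition, n):
    """H(decision | condition): weighted average of the entropy of the
    decision partition restricted to each condition block -- one of the
    two direct ways to build conditional entropy from a partition entropy."""
    total = 0.0
    for b in condition:
        # blocks of the decision partition restricted to condition block b
        sub = [b & d for d in decision if b & d]
        total += len(b) / n * partition_entropy(sub, len(b))
    return total

# Toy universe {0, ..., 5}
cond = [frozenset({0, 1, 2}), frozenset({3, 4, 5})]
dec = [frozenset({0, 1}), frozenset({2, 3}), frozenset({4, 5})]
```

With these definitions, the monotonicity postulate discussed in the abstract corresponds to `conditional_entropy(dec, cond, n) <= partition_entropy(dec, n)`: knowing the condition partition can only reduce uncertainty about the decision partition.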
Orthogonal learning particle swarm optimization
Particle swarm optimization (PSO) relies on its
learning strategy to guide its search direction. Traditionally,
each particle utilizes its historical best experience and its neighborhood’s
best experience through linear summation. Such a
learning strategy is easy to use, but is inefficient when searching
in complex problem spaces. Hence, designing learning strategies
that can utilize previous search information (experience) more
efficiently has become one of the most salient and active PSO
research topics. In this paper, we propose an orthogonal learning
(OL) strategy for PSO to discover more useful information that
lies in the above two experiences via orthogonal experimental
design. We name this PSO as orthogonal learning particle swarm
optimization (OLPSO). The OL strategy can guide particles to
fly in better directions by constructing a more promising and
efficient exemplar. The OL strategy can be applied to PSO with
any topological structure. In this paper, it is applied to both global
and local versions of PSO, yielding the OLPSO-G and OLPSO-L
algorithms, respectively. This new learning strategy and the
new algorithms are tested on a set of 16 benchmark functions, and
are compared with other PSO algorithms and some state-of-the-art
evolutionary algorithms. The experimental results demonstrate
the effectiveness and efficiency of the proposed learning strategy
and algorithms. The comparisons show that OLPSO significantly
improves the performance of PSO, offering faster global convergence,
higher solution quality, and stronger robustness.
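The "linear summation" baseline that the OL strategy improves on is the classical global-version velocity update, in which each particle combines its personal best and the swarm best. A minimal sketch follows; the parameter values (`w`, `c1`, `c2`) are common constriction-style defaults, not values from the paper, and the OL exemplar construction itself is not shown:

```python
import random

def pso_step(positions, velocities, pbest, gbest,
             w=0.729, c1=1.494, c2=1.494):
    """One classical global-best PSO update: each particle's velocity is a
    linear summation of its inertia, attraction to its personal best, and
    attraction to the swarm's best position. Lists are updated in place."""
    for i in range(len(positions)):
        x, v = positions[i], velocities[i]
        new_v, new_x = [], []
        for d in range(len(x)):
            r1, r2 = random.random(), random.random()
            vd = (w * v[d]
                  + c1 * r1 * (pbest[i][d] - x[d])   # cognitive term
                  + c2 * r2 * (gbest[d] - x[d]))      # social term
            new_v.append(vd)
            new_x.append(x[d] + vd)
        velocities[i] = new_v
        positions[i] = new_x
    return positions, velocities

random.seed(1)
pos = [[3.0, -2.0], [1.5, 0.5]]
vel = [[0.0, 0.0], [0.0, 0.0]]
pb = [[3.0, -2.0], [1.5, 0.5]]
gb = [0.0, 0.0]
pso_step(pos, vel, pb, gb)
```

In the OL strategy described above, the two attraction targets `pbest[i]` and `gbest` are replaced by a single exemplar assembled dimension-by-dimension via orthogonal experimental design, rather than blended by this fixed linear summation.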
New Approach on the General Shape Equation of Axisymmetric Vesicles
The general Helfrich shape equation determined by minimizing the curvature
free energy describes the equilibrium shapes of the axisymmetric lipid bilayer
vesicles in different conditions. It is a non-linear differential equation with
variable coefficients. In this letter, by analyzing a unique property of the
solution, we transform this shape equation into a system of two differential
equations, one of which is linear. This system contains all of the known
rigorous solutions of the general shape equation, and more general constraint
conditions are found for its solutions.

Comment: 8 pages, LaTeX, submitted to Mod. Phys. Lett.
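The curvature free energy being minimized is, in the standard Helfrich form (the abstract does not spell it out; the symbols below are the conventional ones):

```latex
F \;=\; \frac{\kappa}{2}\oint (2H - c_0)^2 \,\mathrm{d}A
  \;+\; \lambda \oint \mathrm{d}A \;+\; \Delta p \int \mathrm{d}V ,
```

where $\kappa$ is the bending rigidity, $H$ the mean curvature, $c_0$ the spontaneous curvature, and $\lambda$ and $\Delta p$ are Lagrange multipliers enforcing the area and volume constraints. Varying $F$ yields the general shape equation whose axisymmetric form the letter analyzes.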
On the Three-dimensional Lattice Model
Using the restricted star-triangle relation, it is shown that the -state
spin integrable model on a three-dimensional lattice with spins interacting
round each elementary cube of the lattice proposed by Mangazeev, Sergeev and
Stroganov is a particular case of the Bazhanov-Baxter model.

Comment: 8 pages, LaTeX, 4 figures